Complex Tensors Almost Always Have Best Low-rank Approximations
Abstract
Low-rank tensor approximations are plagued by a well-known problem: a tensor may fail to have a best rank-r approximation. Over R, it is known that such failures can occur with positive probability, sometimes with certainty: in R^(2×2×2), every tensor of rank 3 fails to have a best rank-2 approximation. We will show that while such failures still occur over C, they happen with zero probability. In fact we establish a more general result with useful implications for recent scientific and engineering applications that rely on sparse and/or low-rank approximations: Let V be a complex vector space with a Hermitian inner product, and let X be a closed irreducible complex analytic variety in V. Given any complex analytic subvariety Z ⊆ X with dim Z < dim X, we prove that a general p ∈ V has a unique best X-approximation π_X(p) that does not lie in Z. In particular, this implies that over C, any tensor almost always has a unique best rank-r approximation when r is less than the generic rank. Our result covers many other notions of tensor rank: symmetric rank, alternating rank, Chow rank, Segre–Veronese rank, Segre–Grassmann rank, Segre–Chow rank, Veronese–Grassmann rank, Veronese–Chow rank, Segre–Veronese–Grassmann rank, Segre–Veronese–Chow rank, and more; in all cases, a unique best rank-r approximation almost always exists. It also applies to block-term approximations of tensors: for any r, a general tensor has a unique best r-block-term approximation. When applied to sparse-plus-low-rank approximations, we obtain that for any given r and k, a general matrix has a unique best approximation by a sum of a rank-r matrix and a k-sparse matrix with a fixed sparsity pattern; this arises in, for example, the estimation of covariance matrices of a Gaussian hidden variable model with k observed variables conditionally independent given r hidden variables.
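To make the failure mode concrete, the following is a minimal numerical sketch in Python (assuming numpy; the helper name outer3 is illustrative) of the standard example behind the R^(2×2×2) claim: a rank-3 tensor that is a limit of rank-2 tensors, so its distance to the rank-2 set has infimum 0 that is never attained.

import numpy as np

def outer3(x, y, z):
    # Rank-one 3-tensor x ⊗ y ⊗ z as a 2x2x2 array.
    return np.einsum('i,j,k->ijk', x, y, z)

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

# Rank-3 tensor T = b⊗a⊗a + a⊗b⊗a + a⊗a⊗b.
T = outer3(b, a, a) + outer3(a, b, a) + outer3(a, a, b)

for eps in [1e-1, 1e-3, 1e-5]:
    # Rank-2 tensor T_eps = (1/eps) * [ (a + eps*b)⊗(a + eps*b)⊗(a + eps*b) - a⊗a⊗a ].
    T_eps = (outer3(a + eps*b, a + eps*b, a + eps*b) - outer3(a, a, a)) / eps
    print(eps, np.linalg.norm(T - T_eps))  # error shrinks like O(eps)

Each T_eps has rank at most 2, yet ||T - T_eps|| → 0, so no rank-2 tensor can be a closest point to T; the theorem above says that over C such tensors form a measure-zero set.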
Similar Articles
Orthogonal Low Rank Tensor Approximation: Alternating Least Squares Method and Its Global Convergence
With two notable exceptions, namely that tensors of order 2 (that is, matrices) always have best approximations of arbitrary low rank and that tensors of any order always have a best rank-one approximation, it is known that higher-order tensors may fail to have best low-rank approximations. When the condition of orthogonality is imposed, even under the modest assumption that only one set...
Orthogonal Rank-two Tensor Approximation: a Modified High-order Power Method and Its Convergence Analysis
With the notable exceptions that tensors of order 2 (that is, matrices) always have best approximations of arbitrary low rank and that tensors of any order always have a best rank-one approximation, it is known that higher-order tensors can fail to have best low-rank approximations. When the condition of orthogonality is imposed, even in the most general case where only one pair of components in...
On the Global Convergence of the Alternating Least Squares Method for Rank-One Approximation to Generic Tensors
Tensor decomposition has important applications in various disciplines, but it remains an extremely challenging task even to this day. A slightly more manageable endeavor has been to find a low-rank approximation in place of the decomposition. Even for this less stringent undertaking, it is an established fact that tensors beyond matrices can fail to have best low-rank approximations, with the...
Convergence of Alternating Least Squares Optimisation for Rank-One Approximation to High Order Tensors
The approximation of tensors has important applications in various disciplines, but it remains an extremely challenging task. It is well known that tensors of higher order can fail to have best low-rank approximations, with the important exception that a best rank-one approximation always exists. The most popular approach to low-rank approximation is the alternating least squares (ALS) method...
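As a concrete illustration of the ALS iteration in the rank-one case, here is a minimal Python sketch (assuming numpy; the function name als_rank_one is illustrative, not from the paper): with two factors fixed, the optimal third factor is a simple tensor contraction, so each sweep cannot increase the residual.

import numpy as np

def als_rank_one(T, iters=200, seed=0):
    # Approximate T (shape l x m x n) by lam * x ⊗ y ⊗ z with unit vectors x, y, z.
    rng = np.random.default_rng(seed)
    _, m, n = T.shape
    y = rng.standard_normal(m); y /= np.linalg.norm(y)
    z = rng.standard_normal(n); z /= np.linalg.norm(z)
    for _ in range(iters):
        # Each update solves a least squares problem in closed form.
        x = np.einsum('ijk,j,k->i', T, y, z); x /= np.linalg.norm(x)
        y = np.einsum('ijk,i,k->j', T, x, z); y /= np.linalg.norm(y)
        z = np.einsum('ijk,i,j->k', T, x, y)
        lam = np.linalg.norm(z); z /= lam
    return lam, x, y, z

T = np.random.default_rng(1).standard_normal((3, 4, 5))
lam, x, y, z = als_rank_one(T)
print(np.linalg.norm(T - lam * np.einsum('i,j,k->ijk', x, y, z)))  # residual norm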
Best subspace tensor approximations
In many applications, such as data compression, imaging, or genomic data analysis, it is important to approximate a given tensor by a tensor that is sparsely representable. For matrices, i.e. 2-tensors, such a representation can be obtained via the singular value decomposition, which allows one to compute the best rank-k approximations. For t-tensors with t > 2, many generalizations of the singular val...
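For the matrix case mentioned here, a minimal Python sketch (assuming numpy) of the best rank-k approximation via the truncated singular value decomposition, as guaranteed by the Eckart–Young theorem:

import numpy as np

def best_rank_k(A, k):
    # Best rank-k approximation of A in the Frobenius (and spectral) norm.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

A = np.random.default_rng(0).standard_normal((6, 4))
A2 = best_rank_k(A, 2)
s = np.linalg.svd(A, compute_uv=False)
# The approximation error equals the norm of the discarded singular values.
print(np.linalg.norm(A - A2), np.linalg.norm(s[2:]))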